Counterfactual Evaluation of Peer-Review Assignment Policies: Supplemental Material
Martin Saveski, Steven Jecmen, Nihar B. Shah, Johan Ugander

A  Linear Programs for Peer-Review Assignment
Our estimators assume that there is no interference between the units, i.e., that the treatment of one unit does not affect the outcomes of the others. The first assumption is quite realistic, as in most peer-review systems reviewers cannot see other reviews until they submit their own. The second assumption is important to understand, as there could be "batch effects." We use Monte Carlo methods to tightly estimate these covariances: for the AAAI datasets, we sampled 1 million assignments and computed the empirical covariance. In our setting, small amounts of attrition (relative to the number of policy-induced positivity violations) mean that the fraction of missing data is not known exactly before assignment, but almost. To get more robust estimates of performance, we repeat this process 10 times.
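The Monte Carlo covariance-estimation step can be sketched as follows. This is a minimal illustration, not the paper's implementation: `sample_assignment` is a hypothetical uniform sampler standing in for the actual randomized assignment procedure, and the sizes and sample counts are toy values (the paper samples 1 million assignments).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_assignment(n_reviewers, n_papers, load=2):
    # Hypothetical stand-in for the randomized assignment sampler:
    # each paper receives `load` distinct reviewers uniformly at random.
    A = np.zeros((n_reviewers, n_papers))
    for p in range(n_papers):
        A[rng.choice(n_reviewers, size=load, replace=False), p] = 1.0
    return A

def empirical_covariance(n_samples, n_reviewers=5, n_papers=4):
    # Flatten each sampled assignment matrix into a vector of 0/1
    # indicators, then take the empirical covariance across samples.
    X = np.stack([sample_assignment(n_reviewers, n_papers).ravel()
                  for _ in range(n_samples)])
    return np.cov(X, rowvar=False)

cov = empirical_covariance(2000)
print(cov.shape)  # prints (20, 20)
```

In practice one would repeat this with many more samples (as in the paper) so the covariance entries are tightly estimated.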
Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments
Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar B. Shah, Vincent Conitzer, Fei Fang
We consider three important challenges in conference peer review: (i) reviewers maliciously attempting to get assigned to certain papers to provide positive reviews, possibly as part of quid-pro-quo arrangements with the authors; (ii) "torpedo reviewing," where reviewers deliberately attempt to get assigned to certain papers that they dislike in order to reject them; (iii) reviewer de-anonymization on release of the similarities and the reviewer-assignment code. On the conceptual front, we identify connections between these three problems and present a framework that brings all these challenges under a common umbrella. We then present a (randomized) algorithm for reviewer assignment that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair. We further consider the problem of restricting the joint probability that certain suspect pairs of reviewers are assigned to certain papers, and show that this problem is NP-hard for arbitrary constraints on these joint probabilities but efficiently solvable for a practical special case. Finally, we experimentally evaluate our algorithms on datasets from past conferences, where we observe that they can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity. Our algorithms still achieve this similarity while also preventing reviewers with close associations from being assigned to the same paper.
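The core optimization described above, maximizing total similarity subject to a cap on every reviewer-paper assignment probability, can be sketched as a linear program over the marginal assignment probabilities. The sketch below uses hypothetical toy similarities and `scipy.optimize.linprog`; the paper's second step, which samples an actual deterministic assignment consistent with these marginals, is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Toy similarity matrix: 5 reviewers x 3 papers (hypothetical values).
S = np.array([[0.9, 0.1, 0.4],
              [0.8, 0.7, 0.2],
              [0.1, 0.9, 0.6],
              [0.3, 0.5, 0.8],
              [0.6, 0.2, 0.7]])
n_rev, n_pap = S.shape
k = 2      # reviewers required per paper
ell = 2    # maximum papers per reviewer
q = 0.5    # cap on any single assignment probability

# Decision variables x[r, p] (flattened row-major): the marginal
# probability that reviewer r is assigned paper p. Maximize expected
# similarity, i.e., minimize -sum S[r, p] * x[r, p].
c = -S.ravel()

# Equality constraints: each paper receives exactly k reviewers
# in expectation.
A_eq = np.zeros((n_pap, n_rev * n_pap))
for p in range(n_pap):
    A_eq[p, p::n_pap] = 1.0
b_eq = np.full(n_pap, k)

# Inequality constraints: each reviewer's expected load is at most ell.
A_ub = np.zeros((n_rev, n_rev * n_pap))
for r in range(n_rev):
    A_ub[r, r * n_pap:(r + 1) * n_pap] = 1.0
b_ub = np.full(n_rev, ell)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, q)] * (n_rev * n_pap))
x = res.x.reshape(n_rev, n_pap)
print(x.round(2))
```

With q = 0.5, no malicious reviewer can be assigned their target paper with probability above 50%, at some cost in total similarity relative to the unconstrained optimum.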